(Looked through the templates, and did not find one for this journal.)
My choice of formative assessment is on lab reports, in particular how data is graphed and how errors are detected and then allowed for, or reduced.
Our Conceptual Physics classes use a lot of simple labs, both to reinforce a subject point and to develop experience in finding things out for yourself. The labs are more directed than open inquiry, at least for now.
I always emphasize that graphs should fit a plausible line, not "follow the dots" through every data point.
The graph should "tell the story" and the written material should of course add detail.
A typical story for a recent lab would be "Bigger boats have more weight carrying capacity, and any given boat has more weight carrying capacity in salt water than it has in fresh water."
The graph would show this with a line fitted through (not connecting) the measured capacities of several toy boats, plotted against their maximum displaced volumes.
The graph would show a line for fresh water, and another line for salt water, not one line down the middle of the data.
The lines should not be parallel; they should trend toward an intersection near the origin, where a zero-displacement boat has no carrying capacity.
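To make the "fit one line per condition, don't connect the dots" idea concrete, here is a minimal sketch using numpy's least-squares fit. The boat volumes and measured capacities below are invented for illustration, not real lab data; the point is that each condition gets its own fitted line, and both intercepts should come out near zero.

```python
# Sketch: fit one straight line per water type instead of connecting dots.
# All numbers below are hypothetical, made up to illustrate the method.
import numpy as np

# Maximum displaced volume of each toy boat, in cm^3 (hypothetical)
volume = np.array([100.0, 200.0, 300.0, 400.0])

# Measured carrying capacity in grams, with realistic scatter (hypothetical)
fresh = np.array([95.0, 205.0, 290.0, 410.0])    # fresh water, ~1.000 g/cm^3
salt = np.array([105.0, 210.0, 304.0, 418.0])    # salt water,  ~1.025 g/cm^3

# One least-squares line per condition, NOT one line down the middle
slope_f, intercept_f = np.polyfit(volume, fresh, 1)
slope_s, intercept_s = np.polyfit(volume, salt, 1)

print(f"fresh water: capacity = {slope_f:.3f} * volume + {intercept_f:.1f}")
print(f"salt water:  capacity = {slope_s:.3f} * volume + {intercept_s:.1f}")
# The salt-water slope should be slightly steeper, and both intercepts
# should sit near zero: a boat displacing nothing carries nothing.
```

Plotting the two fitted lines over the scattered points (rather than a dot-to-dot path) is what lets the graph "tell the story" at a glance.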
Something I did not do at first, but will do going forward, is to post and pool everyone's data. In some cases this gives multiple readings for the same value(s), and in other labs it allows the class to cover more values of an Independent Variable than each team could do alone.
Either way, it makes errors more apparent. (It can also make error-prone lab teams more visible, which was not my intent.)
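The way pooling exposes errors can be sketched in a few lines: once several teams' readings for the same setup sit side by side, the odd one out is easy to flag. The team names, readings, and tolerance below are all hypothetical.

```python
# Sketch: pooled class readings for the same boat in fresh water (grams).
# Team names and numbers are invented for illustration.
import statistics

pooled = {
    "team A": 102.0,
    "team B": 98.0,
    "team C": 101.0,
    "team D": 130.0,   # stands out once everyone's data is side by side
}

values = list(pooled.values())
median = statistics.median(values)

# With only a handful of readings, the median is more robust than the
# mean, since one bad reading drags the mean toward itself.
# Flag anything more than 10 g from the median (hypothetical tolerance).
TOLERANCE = 10.0
outliers = [team for team, v in pooled.items() if abs(v - median) > TOLERANCE]
print(f"median = {median:.1f} g, likely errors: {outliers}")
```

In class this would just be a shared table on the board, but the logic is the same: the pooled spread, not any single team's sheet, is what reveals the error.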
At this stage we are going for confirmation of a concept, not precise values, and using simple / improvised equipment, which almost guarantees some amount of error. It is still instructive to see how errors come in, and how they can be reduced.
The grading rubric for lab reports does include all of the above, but perhaps not as strongly as it should. (My view is that a new person making big changes is probably repeating big mistakes that others already know about. I don't want to do that.)